Demystify, Use, Reflect: Preparing students to be informed LLM-users
Chandrashekar, Nikitha Donekal, Nizamani, Sehrish Basir, Ellis, Margaret, Ramakrishnan, Naren
We transitioned our post-CS1 course, which introduces various subfields of computer science, so that it integrates Large Language Models (LLMs) in a structured, critical, and practical manner. It aims to help students develop the skills needed to engage meaningfully and responsibly with AI. The course now includes explicit instruction on how LLMs work, exposure to current tools, ethical issues, and activities that encourage student reflection on personal use of LLMs as well as the larger evolving landscape of AI-assisted programming. In class, we demonstrate the use and verification of LLM outputs, guide students in the use of LLMs as an ingredient in a larger problem-solving loop, and require students to disclose and acknowledge the nature and extent of LLM assistance. Throughout the course, we discuss risks and benefits of LLMs across CS subfields. In our first iteration of the course, we collected and analyzed data from students' pre- and post-course surveys. Students' understanding of how LLMs work became more technical, and their verification and use of LLMs shifted to be more discerning and collaborative. These strategies can be used in other courses to prepare students for the AI-integrated future.
- North America > United States > New York > New York County > New York City (0.07)
- North America > United States > Virginia (0.06)
- Europe > Italy > Lombardy > Milan (0.06)
- (3 more...)
- Research Report (0.64)
- Instructional Material > Course Syllabus & Notes (0.47)
Adviser to UK minister claimed AI firms would never have to compensate creatives
A senior ministerial aide said AI companies would never have to compensate creatives for using their content to train their systems, in a statement that has alarmed campaigners demanding Labour deliver a fairer deal for musicians, artists and writers from the tech industry. Kirsty Innes, recently appointed as a special adviser to Liz Kendall, the secretary of state for science, innovation and technology, said "whether or not you philosophically believe the big AI firms should compensate content creators, they in practice will never legally have to". The government is consulting on how creatives should be compensated by AI firms and only last week leading British artists including Mick Jagger, Kate Bush and Paul McCartney urged Keir Starmer to stand up for creators' human rights and protect their work. Innes, who previously worked at the Tony Blair Institute (TBI) thinktank, has deleted the statement, which she posted to X in February, seven months before she became a ministerial adviser. In the deleted posts, seen by the Guardian, she said: "A lot of this has already happened and it can continue to happen outside the UK, whatever our laws say."
UK government urged to offer more transparency over OpenAI deal
Ministers are facing calls for greater transparency about public data that may be shared with the US tech company OpenAI after the government signed a wide-ranging agreement with the $300m (£222m) company that critics compared to letting a fox into a henhouse. Chi Onwurah, the chair of the House of Commons select committee on science, innovation and technology, warned that Monday's sweeping memorandum of understanding between OpenAI's chief executive, Sam Altman, and the technology secretary, Peter Kyle, was "very thin on detail" and called for guarantees that public data would remain in the UK and clarity about how much of it OpenAI would have access to. The deal paves the way for the Silicon Valley firm behind ChatGPT to explore deploying advanced AI technology in areas including justice, defence and security, and education. It includes OpenAI and the government "partnering to develop safeguards that protect the public and uphold democratic values". Kyle said he wanted Britain to be "front and centre when it comes to developing and deploying AI" and "this can't be achieved without companies like OpenAI".
- Europe > United Kingdom (1.00)
- North America > United States > California (0.26)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
Assessing the Prevalence of AI-assisted Cheating in Programming Courses: A Pilot Study
Abstract-- Tools that can generate computer code in response to inputs written in natural language, such as ChatGPT, pose an existential threat to Computer Science education in its current form, since students can now use these tools to solve assignments without much effort. While that risk has already been recognized by scholars, the proportion of the student body engaging in this new kind of plagiarism is still an open problem. We conducted a pilot study in a large CS class (n=120) to assess the feasibility of estimating AI plagiarism through anonymous surveys and interviews. More than 25% of the survey respondents admitted to committing AI plagiarism. Conversely, only one student agreed to be interviewed. Given the high levels of misconduct acknowledgment, we conclude that surveys are an effective method for studies on the matter, while interviews should be avoided or designed in a way that can entice participation. 1 INTRODUCTION Generative artificial intelligence (GenAI, not to be confused with general artificial intelligence) refers to models that generate new content. The generation is usually guided by an input text known as the "prompt". For example, giving the prompt "a vase of red flowers" to a GenAI model would generate an image depicting red flowers in a vase. Practical applications of GenAI are now mainstream thanks to advances in neural networks. In particular, the clever use of attention mechanisms and the subsequent development of the transformer architecture made efficient learning possible over large text corpora (Vaswani et al., 2023). ChatGPT, an AI application based on an LLM, can convincingly engage in a conversation and answer questions across multiple subjects (OpenAI, 2022). Research on applications of LLMs in education is still in its infancy, but looks promising. Personal tutoring systems (Chang, 2022), content explanation (Leinonen et al., 2023) and assignment generation (Jury et al., 2024) are a few of the ideas that have been explored. From another perspective, LLMs are already a reality in schools.
- Research Report (1.00)
- Questionnaire & Opinion Survey (1.00)
- Personal > Interview (0.93)
- Instructional Material > Course Syllabus & Notes (0.67)
- Education > Educational Setting (0.93)
- Education > Curriculum > Subject-Specific Education (0.49)
- Education > Educational Technology > Educational Software (0.34)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.69)
From Turing to Tomorrow: The UK's Approach to AI Regulation
Ritchie, Oliver, Anderljung, Markus, Rachman, Tom
The UK has pursued a distinctive path in AI regulation, less cautious than the EU but more willing to address risks than the US, and has emerged as a global leader in coordinating AI safety efforts. Impressive developments from companies like London-based DeepMind began to spark concerns in the UK about catastrophic risks from around 2012, although regulatory discussion at the time focussed on bias and discrimination. By 2022, these discussions had evolved into a "pro-innovation" strategy, in which the government directed existing regulators to take a light-touch approach, governing AI at point of use, but avoided regulating the technology or infrastructure directly. ChatGPT arrived in late 2022, galvanising concerns that this approach may be insufficient. The UK responded by establishing an AI Safety Institute to monitor risks and hosting the first international AI Safety Summit in 2023, but - unlike the EU - refrained from regulating frontier AI development in addition to its use. A new government was elected in 2024 which promised to address this gap, but at the time of writing has yet to do so. What should the UK do next? The government faces competing objectives: harnessing AI for economic growth and better public services while mitigating risk. In light of these, we propose establishing a flexible, principles-based regulator to oversee the most advanced AI development and defensive measures against risks from AI-enabled biological design tools, and we argue that more technical work is needed to understand how to respond to AI-generated misinformation. We argue for updated legal frameworks on copyright, discrimination, and AI agents, and that regulators will have a limited but important role if AI substantially disrupts labour markets. If the UK gets AI regulation right, it could demonstrate how democratic societies can harness AI's benefits while managing its risks.
- North America > United States > California (0.28)
- Europe > France (0.14)
- North America > Canada > Ontario > Toronto (0.14)
- (11 more...)
- Law > Statutes (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Regional Government > Europe Government > United Kingdom Government (1.00)
- Banking & Finance > Economy (1.00)
AutoMCQ -- Automatically Generate Code Comprehension Questions using GenAI
Goodfellow, Martin, Booth, Robbie, Fagan, Andrew, Lambert, Alasdair
Students often do not fully understand the code they have written. This sometimes does not become evident until later in their education, which can make it harder to fix their incorrect knowledge or misunderstandings. In addition, being able to fully understand code is increasingly important in a world where students have access to generative artificial intelligence (GenAI) tools, such as GitHub Copilot. One effective solution is to utilise code comprehension questions, where a marker asks questions about a submission to gauge understanding; this can also have the side effect of helping to detect plagiarism. However, this approach is time-consuming and can be difficult and/or expensive to scale. This paper introduces AutoMCQ, which uses GenAI for the automatic generation of multiple-choice code comprehension questions. This is integrated with the CodeRunner automated assessment platform.
- Europe > United Kingdom > Scotland > City of Glasgow > Glasgow (0.43)
- North America > United States > New York > New York County > New York City (0.07)
- Europe > Netherlands > Gelderland > Nijmegen (0.06)
- (3 more...)
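The AutoMCQ abstract does not give implementation details, but the core idea of prompting a GenAI model for a multiple-choice comprehension question about a submission can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the prompt wording, the JSON reply format, and the function names `build_mcq_prompt` and `parse_mcq` are all assumptions for the example.

```python
import json

def build_mcq_prompt(code: str, n_options: int = 4) -> str:
    """Build a prompt asking an LLM to write one multiple-choice
    comprehension question about a student's code submission."""
    return (
        "You are generating a code-comprehension quiz.\n"
        f"Write one multiple-choice question with {n_options} options "
        "about the following submission. Exactly one option must be "
        "correct. Reply as JSON with keys 'question', 'options', "
        "'answer_index'.\n\n"
        f"```\n{code}\n```"
    )

def parse_mcq(reply: str) -> dict:
    """Validate the model's JSON reply before it reaches the quiz bank."""
    mcq = json.loads(reply)
    if not {"question", "options", "answer_index"} <= mcq.keys():
        raise ValueError("missing MCQ fields")
    if not 0 <= mcq["answer_index"] < len(mcq["options"]):
        raise ValueError("answer_index out of range")
    return mcq
```

Validating the reply before use matters because, as other items in this digest note, LLM output can be confidently wrong; a malformed or inconsistent question should be rejected rather than shown to students.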
Revealed: How the UK tech secretary uses ChatGPT for policy advice
Peter Kyle, the UK's secretary of state for science, innovation and technology, has said he uses ChatGPT to understand difficult concepts.
The UK's technology secretary, Peter Kyle, has asked ChatGPT for advice on why the adoption of artificial intelligence is so slow in the UK business community – and which podcasts he should appear on. This week, Prime Minister Keir Starmer said that the UK government should be making far more use of AI in an effort to increase efficiency. "No person's substantive time should be spent on a task where digital or AI can do it better, quicker and to the same high quality and standard," he said. Now, New Scientist has obtained records of Kyle's ChatGPT use under the Freedom of Information (FOI) Act, in what is believed to be a world-first test of whether chatbot interactions are subject to such laws. These records show that Kyle asked ChatGPT to explain why the UK's small and medium business (SMB) community has been so slow to adopt AI.
Multimodal Programming in Computer Science with Interactive Assistance Powered by Large Language Model
Gupta, Rajan Das, Hosain, Md. Tanzib, Mridha, M. F., Ahmed, Salah Uddin
LLM chatbot interfaces allow students to get instant, interactive assistance with homework, but doing so carelessly may not advance educational objectives. In this study, an interactive homework help system based on DeepSeek R1 is developed and first implemented for students enrolled in a large introductory computer science programming course. In addition to an assist button in a well-known code editor, our assistant also has a feedback option in our command-line automatic evaluator. It wraps student work in a personalized prompt that advances our educational objectives without offering answers straight away. We have discovered that our assistant can recognize students' conceptual difficulties and provide ideas, plans, and template code in pedagogically appropriate ways. However, among other mistakes, it occasionally labels correct student code as incorrect or encourages students to use correct-but-lesson-inappropriate approaches, which can lead to long and frustrating journeys for the students. After discussing many development and deployment issues, we provide our conclusions and future actions.
- Instructional Material > Course Syllabus & Notes (1.00)
- Research Report > New Finding (0.68)
- Education > Curriculum (0.50)
- Education > Instructional Theory > Educational Objectives (0.44)
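The prompt-wrapping technique described in the abstract above - embedding student work in a prompt that steers the model toward hints rather than answers - can be sketched roughly as below. The guardrail wording, section layout, and the function name `wrap_for_feedback` are assumptions for illustration, not the system's actual prompt.

```python
def wrap_for_feedback(student_code: str, task: str, error_output: str = "") -> str:
    """Wrap a student's submission in a tutoring prompt that asks the
    model for hints and plans rather than a finished solution."""
    guardrails = (
        "You are a teaching assistant for an introductory programming course.\n"
        "Point out conceptual problems and suggest a plan or template, "
        "but never provide the complete corrected solution.\n"
        "Only use constructs already covered in an introductory course."
    )
    sections = [guardrails, f"Task:\n{task}", f"Student code:\n{student_code}"]
    # The command-line evaluator's output, when available, gives the
    # model concrete context about what failed.
    if error_output:
        sections.append(f"Evaluator output:\n{error_output}")
    return "\n\n".join(sections)
```

The "lesson-inappropriate approaches" failure the authors report is exactly what the last guardrail line tries to address; how well such an instruction works in practice is an empirical question, as their results show.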
The Role of Generative AI in Software Student CollaborAItion
Kiesler, Natalie, Smith, Jacqueline, Leinonen, Juho, Fox, Armando, MacNeil, Stephen, Ihantola, Petri
Collaboration is a crucial part of computing education. The increase in AI capabilities over the last couple of years is bound to profoundly affect all aspects of systems and software engineering, including collaboration. In this position paper, we consider a scenario where AI agents would be able to take on any role in collaborative processes in computing education. We outline these roles, the activities and group dynamics that software development currently includes, and discuss if and in what way AI could facilitate these roles and activities. The goal of our work is to envision and critically examine potential futures. We present scenarios suggesting how AI ... Khan [28] has proposed an inspiring vision of how AI could help realize personalized individual tutors for every learner. Complementing this, an expert panel from 2020 [49] draws a scenario where "AI supports orchestration of the multiple types of activities, learning partners, and interaction patterns that can enrich a classroom". We believe the possibilities are even broader, and to help think about them, we propose a thought experiment that not only accommodates emerging practices and visions but also suggests new use cases in education that (to the best of our knowledge) have not yet been explored.
- North America > Canada > Ontario > Toronto (0.14)
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Europe > Germany > Bavaria > Middle Franconia > Nuremberg (0.14)
- (11 more...)
- Information Technology (0.68)
- Education > Curriculum > Subject-Specific Education (0.47)
- Education > Educational Setting (0.47)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.94)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.42)
Analyzing Chat Protocols of Novice Programmers Solving Introductory Programming Tasks with ChatGPT
Scholl, Andreas, Schiffner, Daniel, Kiesler, Natalie
The increasing need for competent computing graduates proficient in programming, software development, and related technical competencies [Ca17] is one of the factors exacerbating pressure on higher education institutions to offer high quality, competency-based education [Ra21]. However, the latter requires extensive resources, mentoring, and, for example, formative feedback for learners, especially in introductory programming classes [Je22; Lo24]. This is because novices experience a number of challenges in the process, which have been subject to extensive research in the past decades [Du86; Lu18; SS86]. Among them are cognitively demanding competencies [Ki20; Ki24], such as problem understanding, designing and writing algorithms, debugging, and understanding error messages [Du86; ER16; Ki20; Lu18; SS86]. Educators' expectations towards novice learners and what they can achieve in their first semester(s) seem to be too high and unrealistic [Lu16; Lu18; WCL07]. Moreover, the student-educator ratio in introductory programming classes keeps increasing in German higher education institutions, thereby limiting resources to provide feedback and hints, and adequately address heterogeneous prior knowledge and diverse educational biographies [Pe16; SB22].
- Europe > Germany > Bavaria > Middle Franconia > Nuremberg (0.14)
- Europe > Germany > Hesse > Darmstadt Region > Frankfurt (0.14)
- North America > United States > New York (0.05)
- (6 more...)
- Research Report (1.00)
- Instructional Material > Course Syllabus & Notes (0.93)